KINterestTV - Towards Non-invasive Measure of User Interest While Watching TV

  • Julien Leroy
  • François Rocca
  • Matei Mancas
  • Radhwan Ben Madhkour
  • Fabien Grisard
  • Tomas Kliegr
  • Jaroslav Kuchar
  • Jakub Vit
  • Ivan Pirner
  • Petr Zimmermann
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 425)

Abstract

Is it possible to determine a user's interest in a piece of media solely by observing his or her behavior? The aim of this project is to develop an application that detects whether a user is watching content on the TV and uses this information to build a user profile that evolves dynamically. Our approach relies on a 3D sensor to track the movements of the user's head and thereby analyze his or her behavior implicitly. This behavior is synchronized with the TV content (media fragments) and with other user interactions (clicks, gestural interaction) to further infer the viewer's interest. The approach is tested in an experiment simulating the attention changes of a user in a scenario involving second-screen (tablet) interaction, a behavior that has become common among viewers and a typical source of attention switches.
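
To make the core idea concrete, the sketch below shows one plausible way to turn per-frame head pose estimates from a depth sensor into per-fragment interest scores: looking down past a pitch threshold is read as attention on the tablet, a small yaw relative to the TV axis counts as watching the TV, and each media fragment is scored by the fraction of frames spent facing the TV. This is an illustrative assumption, not the paper's implementation; the angular thresholds, the AttentionTarget labels, and both function names are hypothetical.

    # Illustrative sketch (hypothetical names and thresholds), in Python.
    from enum import Enum

    class AttentionTarget(Enum):
        TV = "tv"
        SECOND_SCREEN = "second_screen"
        AWAY = "away"

    def classify_attention(yaw_deg, pitch_deg,
                           tv_yaw_limit=20.0, screen_pitch_limit=-25.0):
        """Label one frame of head pose as attention on the TV, the tablet,
        or elsewhere. Angles are in degrees relative to the TV axis;
        negative pitch means the head is tilted downward."""
        if pitch_deg < screen_pitch_limit:
            return AttentionTarget.SECOND_SCREEN  # looking down at the tablet
        if abs(yaw_deg) <= tv_yaw_limit:
            return AttentionTarget.TV             # roughly facing the screen
        return AttentionTarget.AWAY

    def interest_per_fragment(frames, fragment_bounds):
        """Fraction of frames spent facing the TV inside each media fragment.

        frames: list of (timestamp, yaw_deg, pitch_deg) tuples.
        fragment_bounds: list of (start, end) timestamp pairs.
        """
        scores = []
        for start, end in fragment_bounds:
            labels = [classify_attention(yaw, pitch)
                      for t, yaw, pitch in frames if start <= t < end]
            watching = sum(1 for label in labels if label is AttentionTarget.TV)
            scores.append(watching / len(labels) if labels else 0.0)
        return scores

In a full pipeline these scores would then be combined with explicit signals (clicks, gestures) before updating the user profile.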

Keywords

user tracking, face detection, face direction, face tracking, visual attention, interest, TV, gesture

Copyright information

© IFIP International Federation for Information Processing 2014

Authors and Affiliations

  • Julien Leroy¹
  • François Rocca¹
  • Matei Mancas¹
  • Radhwan Ben Madhkour¹
  • Fabien Grisard¹
  • Tomas Kliegr²
  • Jaroslav Kuchar²,³
  • Jakub Vit⁴
  • Ivan Pirner⁴
  • Petr Zimmermann⁴

  1. TCTS Lab, University of Mons, Belgium
  2. Department of Information and Knowledge Engineering, University of Economics, Prague
  3. Web Engineering Group, Faculty of Information Technology, Czech Technical University in Prague
  4. Faculty of Applied Sciences, University of West Bohemia, Pilsen
