CrowdWatcher: an open-source platform to catch the eye of the crowd

  • Pierre Lebreton
  • Isabelle Hupont
  • Matthias Hirth
  • Toni Mäki
  • Evangelos Skodras
  • Anton Schubert
  • Alexander Raake
Research Article

Abstract

This paper presents CrowdWatcher, an open-source eye tracking platform. It enables researchers to measure gaze location and user engagement in a crowdsourcing context using conventional RGB webcams. The platform particularly advances the field of Quality of Experience (QoE) research, as it allows experimenters to remotely collect, with very limited effort, novel information from crowds of participants, such as their commitment to a task, their attention, and their decision-making processes. Two experiments conducted to demonstrate the platform’s potential are described. The first addresses the measurement of participants’ behavior during a movie selection task. Results show that, by taking gaze analysis into account, the platform provides information complementary to traditional self-reported data. This is of particular relevance since, in a crowdsourcing context, decision processes and attention are difficult to assess and there is often limited control over the test user’s engagement with the task. The second experiment considers a multimedia QoE test and compares the platform’s gaze prediction accuracy to that of a professional infrared eye tracker. While CrowdWatcher performs less well than the professional eye tracker, it is still able to collect valuable gaze information in the far more challenging environment of crowdsourcing. As an outlook to further application domains, the platform’s engagement measurements make it possible to identify participants who do not pay attention to the task.
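The full paper details CrowdWatcher's actual gaze estimation pipeline. Purely as an illustration of the general approach behind webcam-based eye tracking, the following minimal Python sketch (assuming OpenCV and NumPy; none of this code is taken from the CrowdWatcher repository, and all function names are illustrative) locates a coarse pupil position with Haar cascades and maps it to screen coordinates via a least-squares calibration fit.

```python
# Illustrative sketch of calibration-based webcam gaze mapping.
# NOT CrowdWatcher's pipeline -- a simple baseline for intuition only.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_center(frame):
    """Return (x, y) of a coarse pupil estimate in the first detected eye, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            eye = cv2.GaussianBlur(face[ey:ey + eh, ex:ex + ew], (7, 7), 0)
            # The pupil is typically the darkest region of the eye patch.
            _, _, min_loc, _ = cv2.minMaxLoc(eye)
            return (fx + ex + min_loc[0], fy + ey + min_loc[1])
    return None

def fit_mapping(pupils, targets):
    """Least-squares fit from pupil coords to screen coords (affine + xy term).

    Needs at least four (pupil, target) calibration pairs.
    """
    P = np.array([[1, x, y, x * y] for (x, y) in pupils], dtype=float)
    T = np.array(targets, dtype=float)          # N x 2 known screen positions
    W, *_ = np.linalg.lstsq(P, T, rcond=None)   # 4 x 2 weight matrix
    return W

def predict_gaze(W, pupil):
    """Map a pupil position to an estimated on-screen gaze point."""
    x, y = pupil
    return np.array([1, x, y, x * y], dtype=float) @ W
```

In use, a calibration phase would display targets at known screen positions, record the corresponding pupil centers, fit the weight matrix once, and then apply it to subsequent webcam frames; robust systems additionally compensate for head pose and webcam quality, which is precisely what makes the crowdsourcing setting challenging.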

Keywords

Eye tracking · Crowdsourcing · Engagement · User behavior · Quality of experience · Human–computer interaction

Acknowledgements

The authors thank Microworkers.com for sponsoring some of the crowdsourcing experiments. The research leading to these results received funding from the Deutsche Forschungsgemeinschaft (DFG) under Grants HO4770/2-2 and TR257/38-2.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Group of Networked Sensing and Control (NeSC), Zhejiang University, Hangzhou, China
  2. Institute of Intelligent Systems and Robotics, Sorbonne University, Paris, France
  3. User-Centric Analysis of Multimedia Data Group, Technische Universität Ilmenau, Ilmenau, Germany
  4. Department of Computer Science, Aalto University, Espoo, Finland
  5. Department of Electrical and Computer Engineering, University of Patras, Patras, Greece
  6. Audio Visual Technology Group, Technische Universität Ilmenau, Ilmenau, Germany