1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits

  • Mikhail Startsev
  • Ioannis Agtzidis
  • Michael Dorr

Abstract

Deep learning approaches have achieved breakthrough performance in various domains. However, the segmentation of raw eye-movement data into discrete events is still done predominantly either by hand or by algorithms that use hand-picked parameters and thresholds. We propose and make publicly available a small 1D-CNN in conjunction with a bidirectional long short-term memory network that classifies gaze samples as fixations, saccades, smooth pursuit, or noise, simultaneously assigning labels in windows of up to 1 s. In addition to unprocessed gaze coordinates, our approach uses different combinations of the speed of gaze, its direction, and acceleration, all computed at different temporal scales, as input features. We evaluate its performance on a large-scale hand-labeled ground truth data set (GazeCom) and against 12 reference algorithms. Furthermore, we introduce a novel pipeline and metric for event detection in eye-tracking recordings, which enforce stricter criteria on the algorithmically produced events in order to consider them as potentially correct detections. Results show that our deep approach outperforms all others, including the state-of-the-art multi-observer smooth pursuit detector. We additionally test our best model on an independent set of recordings, where our approach stays highly competitive compared to literature methods.
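The architecture described above — a small 1D convolutional front-end followed by a bidirectional LSTM that assigns a label to every sample in a window — can be sketched in Keras, the toolkit the authors use. All concrete numbers below (window length, feature count, filter and unit sizes) are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative sizes (assumptions, not the paper's exact settings):
WINDOW = 257   # gaze samples per window, roughly 1 s at 250 Hz
FEATURES = 5   # per-sample features, e.g., speed/direction at several scales
CLASSES = 4    # fixation, saccade, smooth pursuit, noise

def build_model():
    inp = layers.Input(shape=(WINDOW, FEATURES))
    # Small 1D convolutions extract local temporal patterns per sample.
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.Conv1D(16, kernel_size=3, padding="same", activation="relu")(x)
    # The bidirectional LSTM integrates context from both temporal directions.
    x = layers.Bidirectional(layers.LSTM(16, return_sequences=True))(x)
    # A per-sample softmax labels all samples in the window simultaneously.
    out = layers.TimeDistributed(layers.Dense(CLASSES, activation="softmax"))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
    return model

model = build_model()
probs = model.predict(np.zeros((1, WINDOW, FEATURES)), verbose=0)
print(probs.shape)  # one class distribution per gaze sample in the window
```

Because `return_sequences=True` and `TimeDistributed` keep the time axis intact, the network emits a label distribution for every sample rather than one label per window, which is what sample-level event segmentation requires.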

Keywords

Eye-movement classification · Deep learning · Smooth pursuit
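The abstract's event-level evaluation, which requires detected events to meet stricter criteria before counting as correct, can be illustrated with a temporal intersection-over-union match between detected and ground-truth events. The greedy matching and the 0.5 threshold below are plausible assumptions for illustration, not necessarily the paper's exact procedure:

```python
def iou(a, b):
    """Temporal intersection-over-union of two events given as
    (start, end) sample indices, with end exclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def match_events(detected, ground_truth, threshold=0.5):
    """Count true positives under one-to-one greedy matching: a detected
    event is a hit only if it overlaps an as-yet-unmatched ground-truth
    event with IoU >= threshold (threshold value is an assumption)."""
    unmatched = list(ground_truth)
    hits = 0
    for d in detected:
        best = max(unmatched, key=lambda g: iou(d, g), default=None)
        if best is not None and iou(d, best) >= threshold:
            hits += 1
            unmatched.remove(best)
    return hits

# A detection covering 80% of a ground-truth event counts; a spurious
# detection with no sufficient overlap does not.
print(match_events([(0, 100), (150, 200)], [(10, 90), (300, 400)]))  # 1
```

Precision and recall then follow as hits divided by the number of detected and ground-truth events, respectively, making the metric penalize both fragmented and hallucinated events.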

Notes

Acknowledgements

This research was supported by the Elite Network Bavaria, funded by the Bavarian State Ministry for Research and Education.


Copyright information

© Psychonomic Society, Inc. 2018

Authors and Affiliations

  • Mikhail Startsev (1)
  • Ioannis Agtzidis (1)
  • Michael Dorr (1)
  1. Technical University of Munich, Institute for Human-Machine Communication, Munich, Germany
