An Analysis of Accuracy Requirements for Automatic Eyetracker Recalibration at Runtime

  • Florian van de Camp
  • Dennis Gill
  • Jutta Hild
  • Jürgen Beyerer
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 618)


The initial calibration of an eye tracker is a crucial step in providing accurate gaze data, often as a position on a screen. Factors influencing the calibration, such as the user's pose, can change while the eye tracker is in use. Hence, recalibration may often be necessary, but at the cost of interrupting the user's working task. Monitoring interactions, such as clicks on a target, or detecting salient objects could provide recalibration points without deliberate user interaction. To gain insight into how accurately recalibration points must be localized to ensure that gaze estimation accuracy improves, we conducted a user study and examined the effect of both correct and erroneous localization of recalibration points. The results show that even a localization error of \(1.2^{\circ }\) of visual angle induces an error of less than \(0.5^{\circ }\) in the estimated gaze position on screen. Our results indicate the requirements any method that automatically provides recalibration points has to fulfill.
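The idea of using monitored interactions as recalibration points can be illustrated with a minimal sketch (an assumption for illustration, not the authors' exact method): when the user clicks a target, the difference between the estimated gaze position and the click position yields an offset that can correct subsequent gaze estimates.

```python
# Hypothetical offset-based recalibration from a single click event.
# A click on a known target serves as a "free" recalibration point:
# the user is assumed to look at the target while clicking it.

def recalibration_offset(gaze_at_click, click_pos):
    """Offset (dx, dy) between the estimated gaze and the clicked target."""
    return (click_pos[0] - gaze_at_click[0], click_pos[1] - gaze_at_click[1])

def corrected_gaze(raw_gaze, offset):
    """Apply the stored offset to a raw gaze estimate (screen pixels)."""
    return (raw_gaze[0] + offset[0], raw_gaze[1] + offset[1])

# Example: gaze was estimated 12 px right of and 5 px below the target.
offset = recalibration_offset(gaze_at_click=(812, 405), click_pos=(800, 400))
print(corrected_gaze((612, 305), offset))  # -> (600, 300)
```

In practice such a correction is only useful if the recalibration point is localized accurately enough; the study quantifies how much localization error such a point may carry before it degrades rather than improves gaze accuracy.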



The projects underlying this article are funded by the WTD 81 of the German Federal Ministry of Defense as well as by Fraunhofer IOSB in-house funding. The authors are responsible for the content of this article.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Florian van de Camp (2)
  • Dennis Gill (1)
  • Jutta Hild (2)
  • Jürgen Beyerer (1, 2)

  1. Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  2. Fraunhofer IOSB, Karlsruhe, Germany
