A 3D Morphable Eye Region Model for Gaze Estimation

  • Erroll Wood
  • Tadas Baltrušaitis
  • Louis-Philippe Morency
  • Peter Robinson
  • Andreas Bulling
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9905)


Morphable face models are a powerful tool, but have previously failed to model the eye accurately due to complexities in its material and motion. We present a new multi-part model of the eye that includes a morphable model of the facial eye region, as well as an anatomy-based eyeball model. It is the first morphable model that accurately captures eye region shape, since it was built from high-quality head scans. It is also the first to allow independent eyeball movement, since we treat the eyeball as a separate part. To showcase our model, we present a new method for illumination- and head-pose-invariant gaze estimation from a single RGB image. We fit our model to an image through analysis-by-synthesis, solving for eye region shape, texture, eyeball pose, and illumination simultaneously. The fitted eyeball pose parameters are then used to estimate gaze direction. Through evaluation on two standard datasets, we show that our method generalizes to both webcam and high-quality camera images, and outperforms a state-of-the-art CNN method, achieving a gaze estimation accuracy of \(9.44^\circ\) in a challenging user-independent scenario.
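The fitting loop the abstract describes (render a candidate eye image, compare it to the observation, and update the model parameters to reduce the difference) can be sketched in miniature. Everything below is illustrative, not the paper's actual system: the toy renderer draws a smooth "iris" blob, and only two gaze parameters are optimized, whereas the real method also solves for eye region shape, texture, and illumination over a full 3D morphable model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a renderer: a smooth "iris" blob whose centre is
# displaced by two gaze parameters (pitch, yaw). The constants (image
# size, 10-pixels-per-radian scale, blob width) are all illustrative.
H, W = 32, 48
YY, XX = np.mgrid[0:H, 0:W].astype(float)

def render(gaze):
    pitch, yaw = gaze
    cy, cx = H / 2 + 10.0 * pitch, W / 2 + 10.0 * yaw  # angle -> pixel offset
    return np.exp(-((YY - cy) ** 2 + (XX - cx) ** 2) / (2 * 6.0 ** 2))

# "Observed" image, synthesised here from a hidden ground-truth gaze.
true_gaze = np.array([0.15, -0.30])
observed = render(true_gaze)

def photometric_loss(gaze):
    """Analysis-by-synthesis objective: pixel-wise reconstruction error."""
    return np.sum((render(gaze) - observed) ** 2)

# Fit by minimising the reconstruction error from a neutral initial gaze;
# the minimiser's final parameters are the gaze estimate.
result = minimize(photometric_loss, x0=np.zeros(2), method="L-BFGS-B")
est_gaze = result.x  # should recover approximately [0.15, -0.30]
print(np.round(est_gaze, 3))
```

The key point the sketch preserves is that gaze is never regressed directly from pixels: it falls out of the model parameters that best explain the image.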


Keywords: Morphable model · Gaze estimation · Analysis-by-synthesis

Supplementary material

Supplementary material 1 (MP4, 18,664 KB)



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Erroll Wood (1) (email author)
  • Tadas Baltrušaitis (2)
  • Louis-Philippe Morency (2)
  • Peter Robinson (1)
  • Andreas Bulling (3)

  1. University of Cambridge, Cambridge, UK
  2. Carnegie Mellon University, Pittsburgh, USA
  3. Max Planck Institute for Informatics, Saarbrücken, Germany
